Multiple proxies added #1
base: master
Conversation
Thank you for the fast PR!
Left a few comments/questions about which I'd like to hear your opinion.
pallets/collective-proxy/src/lib.rs
Outdated
proxy: T::AccountId,
filter: T::CallFilter,
) -> DispatchResult {
    T::CollectiveProxy::ensure_origin(origin)?;
The collective proxy origin can register an arbitrary account.
Security-wise, what do you think about this?
Yes, true. It's a security flaw. AFAIU the current pallet design also has it, right? Addressing it would require a major change to the pallet. The proxy pallet uses an announcements mechanism to provide the ability to cancel "proxying". So there's also a tradeoff: should we keep the current minimalist design for the pallet, or make it more secure? I assumed my task was in the scope of the first approach.
Maybe we're not thinking about the same flaw :)
The current pallet's custom CollectiveProxy origin filter is hardcoded in the runtime. A runtime upgrade is a heavily privileged operation and cannot be done easily.
On the other hand, in this new extrinsic call, the collective proxy origin can register any new filter they want.
E.g. they can decide to create a proxy to my own account, and execute any action they want (based on the filter). No one would like that 🙂.
True, true. So should we bring announcements into the design? Are there any other means to prevent this?
Having an announcement delay is a good feature.
Keep in mind though that this pallet is currently used only with private accounts, so even if we had announcements, it wouldn't improve the current situation.
I have two suggestions on how to improve this:
- Keeping your approach, introduce a special origin type that can register a new proxy. E.g. make it root-only.
- Taking a different approach, expand the initial pallet code to allow registering (or defining) multiple account Ids & filters. E.g. something like a type that implements Get<Vec<(Origin, AccountId, CallFilter)>>. That way the same security level as now is kept (see the sketch after this list).
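A minimal sketch of what that second option could look like; the associated type name and tuple layout here are illustrative assumptions, not the pallet's actual API:

// Hedged sketch: the whole proxy table is fixed in the runtime, so changing
// it still requires a runtime upgrade (the same guarantee as today).
// `CollectiveProxies` is a hypothetical name.
#[pallet::config]
pub trait Config: frame_system::Config {
    // ... the pallet's existing associated types (CallFilter, etc.) ...

    /// Statically configured (origin, proxied account, call filter) triples.
    type CollectiveProxies: Get<Vec<(Self::RuntimeOrigin, Self::AccountId, Self::CallFilter)>>;
}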
Implemented the first approach; a corresponding test was also added.
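For reference, a minimal sketch of what a root-only registration origin could look like in the runtime configuration; everything except the ProxyAdmin associated type (visible in a later diff) is an assumption:

// Hedged sketch: only root may register new proxies.
impl pallet_collective_proxy::Config for Runtime {
    // ... other associated types elided ...
    type ProxyAdmin = frame_system::EnsureRoot<AccountId>;
}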
pallets/collective-proxy/src/lib.rs
Outdated
@@ -98,19 +152,22 @@ pub mod pallet {
})]
pub fn execute_call(
    origin: OriginFor<T>,
    filter: Option<T::CallFilter>,
Interesting argument choice - why not choose T::AccountId instead?
It's a design choice that I had to make. As I said in the PR description, I tried to make minimal changes to the pallet that would add multiple-proxy support while keeping its current design as much as possible (because I don't know the details of the use case). I tried to make assumptions about the use case and imagine how the pallet would be used based on its current design, so I decided to go this way (proxy calls based on the needed filters). But again, that's only my assumption of how it would be used, and obviously it's pretty straightforward to rework it the opposite way (proxy calls based on the needed proxy account).
The purpose of the pallet was to allow non-EoA accounts to act on behalf of a custom origin type.
Practically, this is used for pallet-collective origins for implementing governance.
You can check this in one of the runtimes, e.g. shibuya-runtime.
My comment here was more related to why choose the filter as the argument, and not the account Id.
It seems very strange to specify a call filter from which the proxy account is derived - IMO it would be cleaner if the user directly specified the account Id on whose behalf they want to execute a call.
Based on the account Id argument, you check the call filter & decide whether the call is allowed or not.
That's how pallet-proxy does it anyway.
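To illustrate that direction, a hedged sketch of an execute_call that takes the account Id and derives the filter from it; find_proxy, is_allowed, and the error variants are illustrative names, not this pallet's actual API:

// Hedged sketch, mirroring pallet-proxy's approach: the caller names the
// account to act on behalf of, and the filter stored for that account
// decides whether the call may pass.
pub fn execute_call(
    origin: OriginFor<T>,
    real: T::AccountId,
    call: Box<<T as Config>::RuntimeCall>,
) -> DispatchResult {
    T::CollectiveProxy::ensure_origin(origin)?;
    // Look up the proxy definition registered for `real`...
    let def = Self::find_proxy(&real).ok_or(Error::<T>::NotProxy)?;
    // ...and let its call filter decide whether this call is allowed.
    ensure!(def.filter.is_allowed(&call), Error::<T>::CallFiltered);
    call.dispatch(frame_system::RawOrigin::Signed(real).into())
        .map(|_| ())
        .map_err(|e| e.error)
}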
Thanks for the context! I looked through the runtime and the usage is clearer now. I reworked it to AccountId.
proxies.try_push(proxy_def).map_err(|_| Error::<T>::TooManyProxies)?;
Ok(())
})
}
What happens if there are multiple Proxies for the same account Id?
They differ by call filter, and the first one found with a suitable filter will be used. I think I based this logic on the base proxy pallet.
Yes, I understand it's implemented like that.
But since CollectiveProxy can register any filter they want, for any account they want, what is the use case for having multiple ProxyDefinition objects with e.g. the same account Id but different privileges?
If one privilege is a superset of the other, why would the proxy ever use the less privileged one?
To follow up on my previous comment - one thing to consider here is the data structure used to store the values. As it is now, we can have a vector full of essentially the same values. Is this good for the functionality?
Would you suggest a change here?
Ah, I got your concern. I think it can be easily addressed with a superset validation that would prevent the case you're describing (the change has been added).
That would help, yes, but you could also replace the existing vector with a Map-like collection. E.g. if the key is the account Id, then it's not possible to have duplicates.
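A one-item sketch of that idea, with the hasher and query kind assumed:

// Hedged sketch: keying by the delegate account makes duplicates impossible
// by construction; each account holds at most one call filter.
#[pallet::storage]
pub type Proxies<T: Config> =
    StorageMap<_, Twox64Concat, T::AccountId, T::CallFilter, OptionQuery>;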
Yeah. I experimented with such structures (the ordering leftovers were caused by them), but decided that the storage costs weren't worth it and eventually used a simple vec. The added code will prevent any redundant duplication.
I see!
Out of curiosity, how much less performant was the BTreeMap (I'm assuming you tried using that 🙂)?
Back to my suggestion above: performance-wise, this should be equivalent to or faster than the vec approach:
#[pallet::storage]
pub type Proxies<T: Config> = StorageMap<
    _,
    Twox64Concat,
    (T::CollectiveProxy, T::AccountId),
    T::CallFilter,
    ValueQuery,
>;
The T::CollectiveProxy would be extended to be a parameter-like type.
The T::CallFilter would follow your approach.
This way you can support more than one collective proxy type, each proxy can delegate to multiple accounts, but only one delegation type is possible.
Anyway, just something off the top of my head, I haven't tried or implemented it 🙂
Thank you for all the replies & the effort.
Except for the one at the start of this comment, no more questions from me :)
I was considering the standard BoundedBTreeMap. I don't think there is any significant difference in performance for such small sets of data (adding the codec's complexity to the equation makes things even more complicated). My concern was mostly about the additional storage costs of the BTreeMap. I don't remember exactly anymore, but there was a general optimization hint to use compact storage structures as much as possible.
Thank you for your comments and questions! They helped me a lot to better understand the use case.
Thank you for the submission, I left a few comments/questions.
pallets/collective-proxy/src/lib.rs
Outdated
) -> DispatchResult {
    T::ProxyAdmin::ensure_origin(origin)?;
    Proxies::<T>::try_mutate(|proxies| -> Result<(), DispatchError> {
        if !proxies.iter().any(|p| p.proxy == proxy && p.filter.is_superset(&filter)) {
The default implementation of is_superset returns false if no custom logic is provided - here.
Imagine that Proxies contains:
Proxies = [
ProxyDefinition { proxy: Account1, filter: Filter1 }
]
and you add Filter2, which is a superset of Filter1, the add_proxy logic will push it into Proxies:
Proxies = [
ProxyDefinition { proxy: Account1, filter: Filter1 },
ProxyDefinition { proxy: Account1, filter: Filter2 }
]
However, find_proxy will still return the outdated proxy:
ProxyDefinition { proxy: Account1, filter: Filter1 }
Maybe this could lead to incorrect permissions being applied. Adding a test case to verify and address this scenario would be highly beneficial.
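A hedged test sketch for that scenario; the mock builder, account constants, and Filter1/Filter2 variants are assumed from this pallet's test setup, and find_proxy is an illustrative accessor:

// Hedged test sketch: Filter2 is assumed to be a strict superset of Filter1.
#[test]
fn wider_filter_does_not_leave_outdated_proxy_in_effect() {
    ExternalityBuilder::build().execute_with(|| {
        assert_ok!(CollectiveProxy::add_proxy(RuntimeOrigin::root(), ALICE, Filter1));
        assert_ok!(CollectiveProxy::add_proxy(RuntimeOrigin::root(), ALICE, Filter2));
        // Expectation: lookup for ALICE must resolve to the widest filter,
        // not the first (outdated) entry in the vector.
        let def = CollectiveProxy::find_proxy(&ALICE).expect("proxy was registered");
        assert_eq!(def.filter, Filter2);
    });
}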
Hey Igor! I reworked the storage to a map to address your (and Dino's) comments.
/// - `filter`: Call filter used for the proxy
#[pallet::call_index(1)]
#[pallet::weight(T::WeightInfo::add_proxy(T::MaxProxies::get()))]
pub fn add_proxy(
What happens when the ProxyAdmin tries to overwrite an existing proxy with a stricter filter (e.g., a temporary restriction during maintenance or an emergency)?
Reworked to a map.
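With map-based storage, inserting under an existing key simply replaces the old entry, so a stricter filter supersedes the previous one. A sketch (names assumed):

// Hedged sketch: with a map, `insert` overwrites any existing filter for
// `proxy`, so the stricter restriction takes effect immediately.
Proxies::<T>::insert(&proxy, filter);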
pallets/collective-proxy/src/lib.rs
Outdated
/// The set of account proxies
#[pallet::storage]
pub type Proxies<T: Config> = StorageValue<
Would you consider using a StorageMap, as Dino suggested? This can enforce uniqueness and efficient lookups. Also, it could simplify the logic ensuring no redundant filters are stored.
Yes, reworked.
In my solution I tried to minimize the changes to the pallet's API and only add minimal support for several proxy accounts. I don't know the use case, which is why I decided to keep the current design with one origin (and just add multiple proxies for it). Of course, it would be possible to extend the pallet even further and add a map <Origin, Vec>, but that would change the pallet too dramatically, and maybe it would be better to just create a new one instead.